
    Deep Learning-Based Particle Detection and Instance Segmentation for Microscopy Images

    Microscopy imaging techniques enable researchers to gain insights into complex, previously not understood processes. To ease the path to new findings, highly automated, versatile, accurate, user-friendly, and reliable methods for particle detection and instance segmentation are required. In particular, these methods should be applicable to different imaging conditions and applications without requiring expert knowledge for adjustments. Therefore, this thesis presents a new deep learning-based method for particle detection and two deep learning-based methods for instance segmentation. The particle detection approach uses a particle-size-dependent upscaling step and a U-Net network for the semantic segmentation of particle markers. After validating the upscaling with synthetically generated data, the particle detection software BeadNet is presented. Results on a dataset of fluorescent latex beads show that BeadNet detects particles more accurately than traditional methods. The two new instance segmentation methods use a U-Net network with two decoders and are evaluated on four object types and three microscopy imaging techniques. For the evaluation, a single unbalanced training dataset and a single set of post-processing parameters are used. The better of the two methods is then validated further in the Cell Tracking Challenge, achieving multiple top-3 rankings and, for six datasets, a performance comparable to a human expert. In addition, the new instance segmentation software microbeSEG is presented. Like BeadNet, microbeSEG uses OMERO for data management and provides functionality for training data creation, model training, model evaluation, and model application. The qualitative applications of BeadNet and microbeSEG show that both tools enable an accurate analysis of many different kinds of microscopy image data. Finally, this dissertation gives an outlook on the need for further guidelines for image analysis competitions and method comparisons to guide future method development.
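
    As an illustration of the particle-size-dependent upscaling step described above, the following Python sketch upscales an image until particles reach a minimum apparent diameter, applies a marker segmentation (the trained U-Net is stood in for by a placeholder seg_fn), and maps the detected centroids back to the original resolution. Function and parameter names (detect_particles, min_diameter_px) are illustrative and not part of the BeadNet code base.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.transform import rescale

def detect_particles(image, est_diameter_px, min_diameter_px=9, seg_fn=None):
    """Illustrative sketch of size-dependent upscaling before marker
    segmentation; not the actual BeadNet implementation."""
    # Upscale so that particles reach a minimum apparent diameter.
    scale = max(1.0, min_diameter_px / max(est_diameter_px, 1e-6))
    upscaled = rescale(image, scale, order=1, preserve_range=True)
    # seg_fn stands in for the trained U-Net predicting particle markers;
    # a crude threshold is used here as a placeholder.
    marker_mask = seg_fn(upscaled) if seg_fn else upscaled > upscaled.mean()
    # Each connected marker component yields one detection; map centroids
    # back to the original resolution.
    return [tuple(np.asarray(r.centroid) / scale)
            for r in regionprops(label(marker_mask))]
```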

    Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy

    The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio is still a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in or missing from the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method has proven its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where we achieved, as team KIT-Sch-GE, multiple top-three rankings, including two top performances, using a single segmentation model for the diverse data sets. Comment: 25 pages, 14 figures, methods of team KIT-Sch-GE for the IEEE ISBI 2020 Cell Tracking Challenge.
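
    The following sketch illustrates how cell-distance and neighbor-distance training targets could be derived from an instance label image; the exact definition and normalization used in the paper may differ, and the function name distance_targets is hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_targets(instance_mask):
    """Sketch of cell-distance and neighbor-distance targets derived from an
    instance label image (0 = background); normalization is simplified."""
    cell_dist = np.zeros(instance_mask.shape, dtype=np.float32)
    neighbor_dist = np.zeros(instance_mask.shape, dtype=np.float32)
    for cell_id in np.unique(instance_mask)[1:]:      # skip background label 0
        cell = instance_mask == cell_id
        # Distance to the cell border, normalized per cell.
        d_in = distance_transform_edt(cell)
        cell_dist[cell] = d_in[cell] / d_in.max()
        # Distance to the closest *other* cell, inverted so that pixels
        # touching a neighbor get high values.
        others = (instance_mask > 0) & ~cell
        if not others.any():
            continue
        d_out = distance_transform_edt(~others)
        neighbor_dist[cell] = 1.0 - d_out[cell] / max(d_out[cell].max(), 1e-6)
    return cell_dist, neighbor_dist
```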

    Gradient-Based Surface Reconstruction and the Application to Wind Waves

    New gradient-based surface reconstruction techniques are presented: regularized least-absolute-deviations-based methods using common discrete differential operators, and spline-based methods. All new methods are formulated in the same mathematical framework as convex optimization problems and can handle non-rectangular domains. For the spline-based methods, either common P-splines or P1-splines can be used. An extensive reconstruction error analysis shows that the new P1-spline-based method is superior to conventional methods in the case of gradient fields corrupted with outliers. In this analysis, both spline-based methods provide the lowest reconstruction errors for reconstructions from incomplete gradient fields. Furthermore, the pre-processing of gradient fields is investigated: median filter pre-processing offers a computationally efficient approach that is robust to outliers. After the reconstruction error analysis, selected reconstruction methods are applied to imaging slope gauge data measured in the wind-wave facility Aeolotron in Heidelberg. Using newly developed segmentation methods, it is possible to detect differing coordinate-system orientations between the gradient-field data and the reconstruction algorithms. In addition, the use of a zero-slope correction for reconstructions from the provided imaging slope gauge data is justified. The impact of light-refracting bubbles on reconstructions from this data is part of this thesis as well. Finally, some water surface reconstructions for measurement conditions with different fetch lengths at the same wind speed in the Aeolotron are shown.
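
    For context, the conventional least-squares integration of a gradient field, the baseline that the regularized and spline-based methods improve upon, can be written as a sparse linear system. The sketch below uses forward differences on a rectangular grid and is not the thesis implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_gradient(p, q):
    """Least-squares baseline: reconstruct a surface z from its gradient
    field p = dz/dx, q = dz/dy using forward differences. The solution is
    unique only up to an additive constant."""
    h, w = p.shape
    # 1-D forward-difference operators.
    dx = sp.diags([-1, 1], [0, 1], shape=(w - 1, w))
    dy = sp.diags([-1, 1], [0, 1], shape=(h - 1, h))
    # 2-D operators acting on the flattened (row-major) surface z.
    Dx = sp.kron(sp.eye(h), dx)          # shape (h*(w-1), h*w)
    Dy = sp.kron(dy, sp.eye(w))          # shape ((h-1)*w, h*w)
    A = sp.vstack([Dx, Dy]).tocsr()
    b = np.concatenate([p[:, :-1].ravel(), q[:-1, :].ravel()])
    z = lsqr(A, b)[0]
    return z.reshape(h, w)
```

    Replacing the squared residuals by absolute deviations plus a regularization term, as done in the thesis, turns this into a robust convex problem that tolerates outliers in the measured gradients.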

    A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction

    Automatic cell segmentation and tracking makes it possible to gain quantitative insights into the processes driving cell migration. To investigate new data with minimal manual effort, cell tracking algorithms should be easy to apply and should reduce manual curation time by providing automatic correction of segmentation errors. Current cell tracking algorithms, however, are either easy to apply to new data sets but lack automatic segmentation error correction, or they have a vast set of parameters that requires either manual tuning or annotated data for tuning. In this work, we propose a tracking algorithm with only a few manually tunable parameters and automatic segmentation error correction. Moreover, no training data is needed. We compare the performance of our approach to three well-performing tracking algorithms from the Cell Tracking Challenge on data sets with simulated, degraded segmentations that include false negatives as well as over- and under-segmentation errors. Our tracking algorithm can correct false negatives, over- and under-segmentation errors, and mixtures of these segmentation errors. On data sets with under-segmentation errors or a mixture of segmentation errors, our approach performs best. Moreover, without requiring additional manual tuning, our approach ranks several times in the top 3 of the 6th edition of the Cell Tracking Challenge.
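
    As a toy illustration of assignment-based linking, the sketch below matches segmented objects between two consecutive frames via centroid distances and a gating threshold. The published algorithm additionally builds a global graph, estimates movement, and corrects segmentation errors, none of which is shown here; the function name match_frames and the parameter max_dist are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_frames(prev_centroids, curr_centroids, max_dist=50.0):
    """Minimal frame-to-frame assignment: one-to-one matching of object
    centroids with a gating distance. Unmatched objects would start new
    tracks or become candidates for re-linking over missing frames."""
    cost = cdist(np.asarray(prev_centroids), np.asarray(curr_centroids))
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```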

    Automated Annotator Variability Inspection for Biomedical Image Segmentation

    Supervised deep learning approaches for automated diagnosis support require datasets annotated by experts. The intra-annotator variability of a single annotator and the inter-annotator variability between annotators can affect the quality of the diagnosis support. As medical experts will always differ in annotation details, quantitative studies of annotation quality are of particular interest. A consistent and noise-free annotation of large-scale datasets by, for example, dermatologists or pathologists remains a challenge. Hence, methods are needed to automatically inspect annotations in datasets. In this paper, we categorize annotation noise in image segmentation tasks, present methods to simulate annotation noise, and examine its impact on segmentation quality. Two novel automated methods to identify intra-annotator and inter-annotator inconsistencies based on uncertainty-aware deep neural networks are proposed. We demonstrate the benefits of our automated inspection methods, such as focused re-inspection of noisy annotations or the detection of generally different annotation styles, using the biomedical ISIC 2017 melanoma image segmentation dataset.
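
    One common way to obtain the pixel-wise uncertainties such an inspection relies on is Monte Carlo dropout; whether the paper uses this or a different uncertainty estimator is an assumption of the sketch below, and the function name mc_dropout_uncertainty is hypothetical.

```python
import torch

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Sketch of pixel-wise uncertainty estimation via Monte Carlo dropout
    for a binary segmentation network that outputs logits."""
    model.train()                      # keep dropout layers active at inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(model(image)) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)
    # High standard deviation marks pixels where annotations are likely
    # inconsistent with what the model has learned.
    uncertainty = probs.std(dim=0)
    return mean, uncertainty
```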

    Segregation of Dispersed Silica Nanoparticles in Microfluidic Water‐in‐Oil Droplets: A Kinetic Study

    Dispersed negatively charged silica nanoparticles segregate inside microfluidic water-in-oil (W/O) droplets that are coated with a positively charged lipid shell. We report a methodology for the quantitative analysis of this self-assembly process. By using real-time fluorescence microscopy and automated analysis of the recorded images, kinetic data are obtained that characterize the electrostatically driven self-assembly. We demonstrate that the segregation rates can be controlled by the installation of functional moieties, such as nucleic acid and protein molecules, on the nanoparticle surface. We anticipate that our method enables the quantitative and systematic investigation of the segregation of (bio)functionalized nanoparticles in microfluidic droplets. This could lead to complex supramolecular architectures on the inner surface of micrometer-sized hollow spheres, which might be used, for example, as cell containers for applications in the life sciences.
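
    Assuming the segregation kinetics are summarized by fitting a simple saturation model to the shell-fluorescence intensity extracted from the image series (the actual kinetic model used in the study may differ), such a fit could look as follows; the function name fit_segregation_rate is illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_segregation_rate(t, intensity):
    """Illustrative fit of I(t) = I_max * (1 - exp(-k * t)) to a
    shell-fluorescence time series (t, intensity as numpy arrays);
    returns the plateau intensity and the apparent rate constant k."""
    model = lambda t, i_max, k: i_max * (1.0 - np.exp(-k * t))
    (i_max, k), _ = curve_fit(model, t, intensity, p0=(intensity.max(), 0.01))
    return i_max, k
```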

    BeadNet: Deep learning-based bead detection and counting in low-resolution microscopy images

    Motivation: An automated counting of beads is required for many high-throughput experiments, such as studying mimicked bacterial invasion processes. However, state-of-the-art algorithms under- or overestimate the number of beads in low-resolution images. In addition, expert knowledge is needed to adjust parameters.
    Results: In combination with our image labeling tool, BeadNet enables biologists to easily annotate and process their data, reducing the expertise required in many existing image analysis pipelines. BeadNet outperforms state-of-the-art algorithms in terms of missing, added and total number of beads.
    Availability and implementation: BeadNet (software, code and dataset) is available at https://bitbucket.org/t_scherr/beadnet. The image labeling tool is available at https://bitbucket.org/abartschat/imagelabelingtool.
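
    A counting step on top of a predicted marker mask can be as simple as labeling connected components and discarding tiny ones. The sketch below is illustrative and may differ from BeadNet's actual post-processing; the function name count_beads and the parameter min_size are assumptions.

```python
import numpy as np
from skimage.measure import label

def count_beads(marker_mask, min_size=2):
    """Count beads by labeling connected marker components and ignoring
    components smaller than min_size pixels (likely noise)."""
    labels = label(marker_mask > 0)
    sizes = np.bincount(labels.ravel())[1:]   # component sizes, background excluded
    return int((sizes >= min_size).sum())
```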

    Best Practices in Deep Learning-Based Segmentation of Microscopy Images

